accelerometer data
REMONI: An Autonomous System Integrating Wearables and Multimodal Large Language Models for Enhanced Remote Health Monitoring
Ho, Thanh Cong, Kharrat, Farah, Abid, Abderrazek, Karray, Fakhri
With the widespread adoption of wearable devices in our daily lives, the demand and appeal for remote patient monitoring have significantly increased. Most research in this field has concentrated on collecting sensor data, visualizing it, and analyzing it to detect anomalies in specific diseases such as diabetes, heart disease, and depression. However, this domain has a notable gap in the aspect of human-machine interaction. This paper proposes REMONI, an autonomous REmote health MONItoring system that integrates multimodal large language models (MLLMs), the Internet of Things (IoT), and wearable devices. The system automatically and continuously collects vital signs, accelerometer data from a special wearable (such as a smartwatch), and visual data in patient video clips collected from cameras. This data is processed by an anomaly detection module, which includes a fall detection model and algorithms to identify emergency conditions and alert caregivers. A distinctive feature of our proposed system is the natural language processing component, developed with MLLMs capable of detecting and recognizing a patient's activity and emotion while responding to healthcare workers' inquiries. Additionally, prompt engineering is employed to integrate all patient information seamlessly. As a result, doctors and nurses can access real-time vital signs and the patient's current state and mood by interacting with an intelligent agent through a user-friendly web application. Our experiments demonstrate that our system is implementable and scalable for real-life scenarios, potentially reducing the workload of medical professionals and healthcare costs. A full-fledged prototype illustrating the functionalities of the system has been developed and is being tested to demonstrate the robustness of its various capabilities.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- Health & Medicine > Consumer Health (1.00)
- Health & Medicine > Diagnostic Medicine > Vital Signs (0.90)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.68)
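The abstract's prompt-engineering step, folding vitals, detected activity, and emotion into a single query for the MLLM agent, might look something like the sketch below. All field names and the prompt wording are illustrative assumptions, not REMONI's actual implementation.

```python
# Hedged sketch: assembling wearable vitals and detected patient state into
# one prompt for a multimodal LLM agent. Field names are invented for illustration.

def build_patient_prompt(vitals, activity, emotion, question):
    """Combine the latest sensor readings and detected state into a prompt."""
    context = ", ".join(f"{k}: {v}" for k, v in vitals.items())
    return (
        "You are a remote health monitoring assistant.\n"
        f"Latest vitals -> {context}\n"
        f"Detected activity: {activity}; detected emotion: {emotion}\n"
        f"Clinician question: {question}"
    )

prompt = build_patient_prompt(
    {"heart_rate": "88 bpm", "spo2": "97%"},
    activity="walking",
    emotion="calm",
    question="Is the patient showing signs of distress?",
)
print(prompt)
```

The point of the pattern is that the agent never sees raw sensor streams; upstream modules condense them into text the LLM can reason over.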
OpenTSLM: Time-Series Language Models for Reasoning over Multivariate Medical Text- and Time-Series Data
Langer, Patrick, Kaar, Thomas, Rosenblattl, Max, Xu, Maxwell A., Chow, Winnie, Maritsch, Martin, Verma, Aradhana, Han, Brian, Kim, Daniel Seung, Chubb, Henry, Ceresnak, Scott, Zahedivash, Aydin, Sandhu, Alexander Tarlochan Singh, Rodriguez, Fatima, McDuff, Daniel, Fleisch, Elgar, Aalami, Oliver, Barata, Filipe, Schmiedmayer, Paul
LLMs have emerged as powerful tools for interpreting multimodal data. In medicine, they hold particular promise for synthesizing large volumes of clinical information into actionable insights and digital health applications. Yet, a major limitation remains their inability to handle time series. To overcome this gap, we present OpenTSLM, a family of Time Series Language Models (TSLMs) created by integrating time series as a native modality into pretrained LLMs, enabling reasoning over multiple time series of any length. We investigate two architectures for OpenTSLM. The first, OpenTSLM-SoftPrompt, models time series implicitly by concatenating learnable time series tokens with text tokens via soft prompting. Although parameter-efficient, we hypothesize that explicit time series modeling scales better and outperforms implicit approaches. We thus introduce OpenTSLM-Flamingo, which integrates time series with text via cross-attention. We benchmark both variants against baselines that treat time series as text tokens or plots, across a suite of text-time-series Chain-of-Thought (CoT) reasoning tasks. We introduce three datasets: HAR-CoT, Sleep-CoT, and ECG-QA-CoT. Across all, OpenTSLM models outperform baselines, reaching 69.9 F1 in sleep staging and 65.4 in HAR, compared to 9.05 and 52.2 for finetuned text-only models. Notably, even 1B-parameter OpenTSLM models surpass GPT-4o (15.47 and 2.95). OpenTSLM-Flamingo matches OpenTSLM-SoftPrompt in performance and outperforms it on longer sequences, while maintaining stable memory requirements. By contrast, SoftPrompt grows exponentially in memory with sequence length, requiring around 110 GB of VRAM compared to 40 GB when training on ECG-QA with LLaMA-3B. Expert reviews by clinicians find strong reasoning capabilities exhibited by OpenTSLMs on ECG-QA. To facilitate further research, we provide all code, datasets, and models open-source.
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Asia > Nepal (0.04)
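The soft-prompt variant described above can be illustrated with a toy: a small learned encoder projects each time-series patch to an embedding, and the resulting "time-series tokens" are prepended to the ordinary text-token embeddings before the LLM sees them. Patch length, dimensions, and the linear projection are illustrative assumptions, not the paper's exact setup.

```python
# Toy sketch of soft prompting with time-series tokens (assumed setup).

def patchify(series, patch_len):
    """Split a 1-D series into consecutive patches."""
    return [series[i:i + patch_len] for i in range(0, len(series), patch_len)]

def encode_patch(patch, weights):
    """Stand-in for a learnable linear projection: one output dim per weight row."""
    return [sum(w * x for w, x in zip(row, patch)) for row in weights]

def soft_prompt(series, text_embeddings, weights, patch_len=4):
    """Prepend encoded time-series tokens to the text-token embeddings."""
    ts_tokens = [encode_patch(p, weights)
                 for p in patchify(series, patch_len) if len(p) == patch_len]
    return ts_tokens + text_embeddings

series = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # 8 samples -> 2 patches
weights = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]  # 2-dim toy embedding
text = [[0.5, 0.5]]                                  # one text-token embedding
tokens = soft_prompt(series, text, weights)
print(len(tokens))  # 2 time-series tokens + 1 text token
```

The memory observation in the abstract follows from this design: every extra patch becomes another token in the LLM's input sequence, so attention cost grows with series length, whereas cross-attention (the Flamingo-style variant) keeps the main sequence length fixed.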
ActiNet: Activity intensity classification of wrist-worn accelerometers using self-supervised deep learning
Acquah, Aidan, Chan, Shing, Doherty, Aiden
The use of reliable and accurate human activity recognition (HAR) models on passively collected wrist-accelerometer data is essential in large-scale epidemiological studies that investigate the association between physical activity and health outcomes. While the use of self-supervised learning has generated considerable excitement in improving HAR, it remains unknown to what extent these models, coupled with hidden Markov models (HMMs), would make a tangible improvement to classification performance, and what effect this may have on the predicted daily activity intensity compositions. Using 151 CAPTURE-24 participants' data, we trained the ActiNet model, a self-supervised, 18-layer, modified ResNet-V2 model, followed by hidden Markov model (HMM) smoothing, to classify labels of activity intensity. The performance of this model, evaluated using 5-fold stratified group cross-validation, was then compared to a baseline random forest (RF) + HMM established in the existing literature. Differences in performance and classification outputs were compared across subgroups of age and sex within the CAPTURE-24 population. The ActiNet model was able to distinguish labels of activity intensity with a mean macro F1 score of 0.82 and a mean Cohen's kappa score of 0.86. This exceeded the performance of the RF + HMM, trained and validated on the same dataset, with mean scores of 0.77 and 0.81, respectively. These findings were consistent across subgroups of age and sex, and encourage the use of ActiNet for the extraction of activity intensity labels from wrist-accelerometer data in future epidemiological studies.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Oceania > Australia > New South Wales > Sydney (0.04)
- Health & Medicine > Consumer Health (0.70)
- Health & Medicine > Epidemiology (0.68)
- Health & Medicine > Therapeutic Area (0.46)
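The classifier + HMM pattern shared by ActiNet and the RF baseline can be sketched with a minimal Viterbi smoother: per-window class probabilities from any classifier are combined with transition probabilities that penalize implausibly rapid switches between intensity labels. The probabilities below are invented for illustration, not taken from the paper.

```python
# Minimal Viterbi smoothing of per-window classifier probabilities (toy values).
import math

def viterbi_smooth(obs_probs, trans, prior):
    """obs_probs[t][s]: P(state s | window t); trans[i][j]: P(j | i)."""
    n_states = len(prior)
    backpointers = [[None] * n_states]
    score = [math.log(prior[s]) + math.log(obs_probs[0][s]) for s in range(n_states)]
    for t in range(1, len(obs_probs)):
        new_score, back = [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: score[p] + math.log(trans[p][s]))
            new_score.append(score[best] + math.log(trans[best][s])
                             + math.log(obs_probs[t][s]))
            back.append(best)
        score = new_score
        backpointers.append(back)
    # Backtrack from the best final state.
    state = max(range(n_states), key=lambda s: score[s])
    out = [state]
    for back in reversed(backpointers[1:]):
        state = back[state]
        out.append(state)
    return out[::-1]

# A lone "vigorous" window (state 1) between "light" windows (state 0)
# is smoothed away by the sticky transition matrix.
probs = [[0.9, 0.1], [0.4, 0.6], [0.9, 0.1]]
trans = [[0.95, 0.05], [0.05, 0.95]]
smoothed = viterbi_smooth(probs, trans, prior=[0.5, 0.5])
print(smoothed)  # [0, 0, 0]
```

This is the intuition behind the HMM stage: isolated, low-confidence label flips get overridden by temporal context.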
Watch Your Step: A Cost-Sensitive Framework for Accelerometer-Based Fall Detection in Real-World Streaming Scenarios
Aderinola, Timilehin B., Palmerini, Luca, D'Ascanio, Ilaria, Chiari, Lorenzo, Klenk, Jochen, Becker, Clemens, Caulfield, Brian, Ifrim, Georgiana
Real-time fall detection is crucial for enabling timely interventions and mitigating the severe health consequences of falls, particularly in older adults. However, existing methods often rely on simulated data or assumptions such as prior knowledge of fall events, limiting their real-world applicability. Practical deployment also requires efficient computation and robust evaluation metrics tailored to continuous monitoring. This paper presents a real-time fall detection framework for continuous monitoring without prior knowledge of fall events. Using over 60 hours of inertial measurement unit (IMU) data from the FARSEEING real-world falls dataset, we employ recent efficient classifiers to compute fall probabilities in streaming mode. To enhance robustness, we introduce a cost-sensitive learning strategy that tunes the decision threshold using a cost function reflecting the higher risk of missed falls compared to false alarms. Unlike many methods that achieve high recall only at the cost of precision, our framework achieved a recall of 1.00 and a precision of 0.84. These results demonstrate that cost-sensitive threshold tuning enhances the robustness of accelerometer-based fall detection. They also highlight the potential of our computationally efficient framework for deployment in real-time wearable sensor systems for continuous monitoring. A fall is an event that results in a person coming to rest unintentionally on the ground, floor, or other lower level [1].
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
- North America > United States (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- (2 more...)
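The cost-sensitive tuning idea in the abstract, weighting missed falls more heavily than false alarms when choosing the decision threshold, can be sketched as a simple sweep. The cost weights and scores below are illustrative assumptions, not the paper's values.

```python
# Sketch of cost-sensitive threshold tuning for streaming fall probabilities.

def tune_threshold(scores, labels, cost_fn=10.0, cost_fp=1.0):
    """Pick the threshold minimizing cost_fn * misses + cost_fp * false alarms."""
    best_t, best_cost = 0.5, float("inf")
    for t in sorted(set(scores)):
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
        cost = cost_fn * fn + cost_fp * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

scores = [0.95, 0.80, 0.30, 0.20, 0.60, 0.10]  # model fall probabilities (toy)
labels = [1,    1,    0,    0,    0,    0]     # 1 = true fall
t = tune_threshold(scores, labels)
print(t)  # 0.8
```

Because a missed fall costs ten times a false alarm here, the sweep settles on the lowest threshold that still catches every fall, which is exactly the recall-first trade-off the paper targets.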
An Explainable AI based approach for Monitoring Animal Health
Jana, Rahul, Dixit, Shubham, Sharma, Mrityunjay, Kumar, Ritesh
Monitoring cattle health and optimizing yield are key challenges faced by dairy farmers due to difficulties in tracking all animals on the farm. This work aims to showcase modern data-driven farming practices based on explainable machine learning (ML) methods that explain the activity and behaviour of dairy cattle (cows). Continuous data collection from 3-axis accelerometer sensors and the use of robust ML methodologies and algorithms provide farmers and researchers with actionable information on cattle activity, allowing farmers to make informed decisions and incorporate sustainable practices. This study utilizes Bluetooth-based Internet of Things (IoT) devices and 4G networks for seamless data transmission, immediate analysis, and inference generation, and explains the models' performance with explainability frameworks. Special emphasis is put on the pre-processing of the accelerometer time series data, including the extraction of statistical characteristics, signal processing techniques, and lag-based features using the sliding window technique. Various hyperparameter-optimized ML models are evaluated across varying window lengths for activity classification. The k-nearest neighbour classifier achieved the best performance, with a mean AUC of 0.98 (standard deviation 0.0026) on the training set and 0.99 on the test set. To ensure transparency, Explainable AI frameworks such as SHAP are used to interpret feature importance in a way that can be understood and used by practitioners. A detailed comparison of the important features, along with a stability analysis of the selected features, supports the development of explainable and practical ML models for sustainable livestock management.
- Asia > India > Chandigarh (0.04)
- Asia > India > Uttar Pradesh (0.04)
- Asia > India > Punjab (0.04)
- Asia > India > Himachal Pradesh > Shimla (0.04)
- Food & Agriculture > Agriculture (1.00)
- Health & Medicine > Consumer Health (0.93)
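The pre-processing step described above, statistical summaries per sliding window plus lag-based features, might look like the following sketch. The window length, step, and feature set are assumptions for illustration.

```python
# Illustrative sliding-window feature extraction with a lag feature.
import statistics

def window_features(signal, win=4, step=4):
    """Per-window statistics plus the previous window's mean as a lag feature."""
    feats, prev_mean = [], 0.0
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        mean = statistics.fmean(w)
        feats.append({
            "mean": mean,
            "std": statistics.pstdev(w),
            "range": max(w) - min(w),
            "lag_mean": prev_mean,  # carried over from the preceding window
        })
        prev_mean = mean
    return feats

accel_x = [0.0, 0.1, 0.0, 0.1, 0.9, 1.1, 1.0, 0.8]  # toy accelerometer axis
rows = window_features(accel_x)
print(rows[1]["lag_mean"])  # mean of the first window
```

Each row would then become one training example for the windowed classifiers, with the lag feature giving the model a one-step memory of the preceding window.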
Mindfulness Meditation and Respiration: Accelerometer-Based Respiration Rate and Mindfulness Progress Estimation to Enhance App Engagement and Mindfulness Skills
Khan, Mohammad Nur Hossain, Creswell, David, Albert, Jordan, O'Connell, Patrick, Fallon, Shawn, Polowitz, Mathew, Xu, Xuhai "Orson", Islam, Bashima
Mindfulness training is widely recognized for its benefits in reducing depression, anxiety, and loneliness. With the rise of smartphone-based mindfulness apps, digital meditation has become more accessible, but sustaining long-term user engagement remains a challenge. This paper explores whether respiration biosignal feedback and mindfulness skill estimation enhance system usability and skill development. We develop a smartphone accelerometer-based respiration tracking algorithm, eliminating the need for additional wearables. Unlike existing methods, our approach accurately captures the slow breathing patterns typical of mindfulness meditation. Additionally, we introduce the first quantitative framework to estimate mindfulness skills (concentration, sensory clarity, and equanimity) based on accelerometer-derived respiration data. We develop and test our algorithms on 261 mindfulness sessions in both controlled and real-world settings. A user study comparing an experimental group receiving biosignal feedback with a control group using a standard app shows that respiration feedback enhances system usability. Our respiration tracking model achieves a mean absolute error (MAE) of 1.6 breaths per minute, closely aligning with ground truth data, while our mindfulness skill estimation attains F1 scores of 80-84% in tracking skill progression. By integrating respiration tracking and mindfulness estimation into a commercial app, we demonstrate the potential of smartphone sensors to enhance digital mindfulness training.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- Asia > Middle East > Jordan (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
- Research Report > Strength Medium (1.00)
- Research Report > Strength High (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Consumer Health (1.00)
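A crude version of respiration-rate estimation from a slow periodic accelerometer signal is to count local maxima and convert to breaths per minute; the paper's method is more elaborate, and the sampling rate and synthetic signal below are assumptions.

```python
# Minimal sketch: breaths/min from a slow periodic signal via peak counting.
import math

def respiration_rate(signal, fs):
    """Count local maxima above the signal mean; convert to breaths per minute."""
    mean = sum(signal) / len(signal)
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > mean
             and signal[i] > signal[i - 1]
             and signal[i] >= signal[i + 1]]
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min

fs = 10  # Hz (assumed sampling rate)
t = [i / fs for i in range(600)]                          # 60 s of samples
breath = [math.sin(2 * math.pi * 0.1 * ti) for ti in t]   # 0.1 Hz -> 6 breaths/min
rate = respiration_rate(breath, fs)
print(round(rate))  # 6
```

Real chest-motion data is far noisier than this clean sine, which is why the slow-breathing regime of meditation (where peaks are shallow and widely spaced) is called out as the hard case in the abstract.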
Classification of Cattle Behavior and Detection of Heat (Estrus) using Sensor Data
Dhakshinamoorthy, Druva, Jha, Avikshit, Majumdar, Sabyasachi, Ghosh, Devdulal, Chakraborty, Ranjita, Ray, Hena
This paper presents a novel system for monitoring cattle behavior and detecting estrus (heat) periods using sensor data and machine learning. We designed and deployed a low-cost Bluetooth-based neck collar equipped with accelerometer and gyroscope sensors to capture real-time behavioral data from real cows, which was synced to the cloud. A labeled dataset was created using synchronized CCTV footage to annotate behaviors such as feeding, rumination, lying, and others. We evaluated multiple machine learning models -- Support Vector Machines (SVM), Random Forests (RF), and Convolutional Neural Networks (CNN) -- for behavior classification. Additionally, we implemented a Long Short-Term Memory (LSTM) model for estrus detection using behavioral patterns and anomaly detection. Our system achieved over 93% behavior classification accuracy and 96% estrus detection accuracy on a limited test set. The approach offers a scalable and accessible solution for precision livestock monitoring, especially in resource-constrained environments.
- Asia > India > West Bengal > Kolkata (0.05)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- Asia > India > Goa (0.04)
Event Classification of Accelerometer Data for Industrial Package Monitoring with Embedded Deep Learning
Renault, Manon, Younes, Hamoud, Tessier, Hugo, Roy, Ronan Le, Pasdeloup, Bastien, Léonardon, Mathieu
Package monitoring is an important topic in industrial applications, with significant implications for operational efficiency and ecological sustainability. In this study, we propose an approach that employs an embedded system, placed on reusable packages, to detect their state (on a forklift, in a truck, or in an undetermined location). We aim to design a system with a lifespan of several years, corresponding to the lifespan of reusable packages. Our analysis demonstrates that maximizing device lifespan requires minimizing wake time. We propose a pipeline that includes data processing, training, and evaluation of a deep learning model designed for imbalanced, multiclass time series data collected from an embedded sensor. The method uses a one-dimensional Convolutional Neural Network architecture to classify accelerometer data from the IoT device. Before training, two data augmentation techniques are tested to address the imbalance problem of the dataset: the Synthetic Minority Oversampling TEchnique and the ADAptive SYNthetic sampling approach. After training, compression techniques are applied to reduce the model size. On the considered two-class problem, the methodology yields a precision of 94.54% for the first class and 95.83% for the second class, while compression techniques reduce the model size by a factor of four. The trained model is deployed on the IoT device, where it operates with a power consumption of 316 mW during inference.
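The SMOTE step mentioned above boils down to interpolating between a minority sample and one of its nearest minority neighbors. The sketch below shows that core idea in miniature; the neighbor count and toy data are assumptions, not the paper's configuration.

```python
# Compact sketch of SMOTE-style oversampling: interpolate minority samples.
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic points between minority samples and neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbors of a (excluding a itself), by squared distance.
        neighbors = sorted((m for m in minority if m is not a),
                           key=lambda m: sum((x - y) ** 2 for x, y in zip(a, m)))[:k]
        b = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # toy minority-class points
new = smote(minority, n_new=4)
print(len(new))  # 4 synthetic minority samples
```

ADASYN, the other technique named in the abstract, follows the same interpolation recipe but concentrates new samples on minority points that are hardest to classify.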
AI-Generated Fall Data: Assessing LLMs and Diffusion Model for Wearable Fall Detection
Alamgeer, Sana, Souissi, Yasine, Ngu, Anne H. H.
Training fall detection systems is challenging due to the scarcity of real-world fall data, particularly from elderly individuals. To address this, we explore the potential of Large Language Models (LLMs) for generating synthetic fall data. This study evaluates text-to-motion (T2M, SATO, ParCo) and text-to-text models (GPT4o, GPT4, Gemini) in simulating realistic fall scenarios. We generate synthetic datasets and integrate them with four real-world baseline datasets to assess their impact on fall detection performance using a Long Short-Term Memory (LSTM) model. Additionally, we compare LLM-generated synthetic data with a diffusion-based method to evaluate their alignment with real accelerometer distributions. Results indicate that dataset characteristics significantly influence the effectiveness of synthetic data, with LLM-generated data performing best in low-frequency settings (e.g., 20Hz) while showing instability in high-frequency datasets (e.g., 200Hz). While text-to-motion models produce more realistic biomechanical data than text-to-text models, their impact on fall detection varies. Diffusion-based synthetic data demonstrates the closest alignment to real data but does not consistently enhance model performance. An ablation study further confirms that the effectiveness of synthetic data depends on sensor placement and fall representation. These findings provide insights into optimizing synthetic data generation for fall detection models.
- North America > United States > Texas > Hays County > San Marcos (0.04)
- North America > United States > North Carolina > Mecklenburg County > Charlotte (0.04)
- Europe > Switzerland (0.04)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Research Report > Experimental Study (0.68)
MELON: Multimodal Mixture-of-Experts with Spectral-Temporal Fusion for Long-Term Mobility Estimation in Critical Care
Zhang, Jiaqing, Contreras, Miguel, Sena, Jessica, Davidson, Andrea, Ren, Yuanfang, Guan, Ziyuan, Ozrazgat-Baslanti, Tezcan, Loftus, Tyler J., Nerella, Subhash, Bihorac, Azra, Rashidi, Parisa
Patient mobility monitoring in intensive care is critical for ensuring timely interventions and improving clinical outcomes. While accelerometry-based sensor data are widely adopted in training artificial intelligence models to estimate patient mobility, existing approaches face two key limitations highlighted in clinical practice: (1) modeling long-term accelerometer data is challenging due to its high dimensionality, variability, and noise, and (2) efficient and robust methods for long-term mobility assessment are lacking. To overcome these challenges, we introduce MELON, a novel multimodal framework designed to predict 12-hour mobility status in the critical care setting. MELON leverages a dual-branch network architecture, combining the strengths of spectrogram-based visual representations and sequential accelerometer statistical features. MELON effectively captures global and fine-grained mobility patterns by integrating a pre-trained image encoder for rich frequency-domain feature extraction and a Mixture-of-Experts encoder for sequence modeling. We trained and evaluated the MELON model on a multimodal dataset of 126 patients recruited from nine Intensive Care Units at the University of Florida Health Shands Hospital main campus in Gainesville, Florida. Experiments showed that MELON outperforms conventional approaches for 12-hour mobility status estimation with an overall area under the receiver operating characteristic curve (AUROC) of 0.82 (95% confidence interval 0.78-0.86). Notably, our experiments also revealed that accelerometer data collected from the wrist provides robust predictive performance compared with data from the ankle, suggesting a single-sensor solution that can reduce patient burden and lower deployment costs...
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
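The dual-branch idea in MELON, one branch summarizing frequency-domain content and one summarizing sequential statistics, can be illustrated with a toy fusion: a naive DFT supplies spectral energies, simple statistics supply the temporal view, and the two feature vectors are concatenated. The dimensions and the naive DFT stand in for MELON's pre-trained image encoder and Mixture-of-Experts encoder; none of this is the paper's architecture.

```python
# Toy spectral-temporal fusion: DFT energies + sequence statistics, concatenated.
import cmath
import statistics

def dft_energy(signal, n_bins=4):
    """Magnitudes of the first n_bins DFT coefficients (naive O(n^2) DFT)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n_bins)]

def fuse_features(signal):
    spectral = dft_energy(signal)                                   # frequency branch
    temporal = [statistics.fmean(signal), statistics.pstdev(signal)]  # temporal branch
    return spectral + temporal                                      # simple concatenation

accel = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]  # toy oscillation, period 4
features = fuse_features(accel)
print(len(features))  # 4 spectral bins + 2 statistics
```

For this periodic toy signal the energy concentrates in one DFT bin, which is the kind of frequency-domain structure a spectrogram branch is meant to expose alongside raw statistical summaries.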